A Linear Logical Framework
                 Abstract syntax       Concrete syntax
  Kinds          type                  type
                 Πx:A. K               {x:A} K
                 A → K                 A -> K
  Types          A & B                 A & B
                 A ⊸ B                 A -o B
                 Πx:A. B               {x:A} B
                 A → B                 A -> B
  Objects        λ̂x:A. M               [x^A] M
                 M^N                   M ^ N
                 λx:A. M               [x:A] M
                 M N                   M N

The next table gives the relative precedence and associativity of these operators. As in Elf, parentheses are available to override these behaviors.

  Precedence     Operator              Associativity
  highest        ^                     left associative
                 &                     right associative
  lowest         -o  ->                right associative

The type theory λ^{Π⊸&⊤} presented in the previous section retains all the desirable properties of LF and also augments this formalism with linear assumptions, admitting volatile manipulations, and with a suitable set of operators to manage them. These new features overcome the above deficiency of LF: if we represent the volatile context of an object language as linear assumptions in λ^{Π⊸&⊤}, destructive context operations in the object formalism can be modeled by an appropriate combination of linear operators. The linear logical framework LLF is founded on the type theory λ^{Π⊸&⊤} and combines, as its meta-representation methodology, the judgments-as-types technique of LF with the above observation.

The present section illustrates the added expressiveness of LLF as a logical framework by describing the meta-representation methodology it adopts, first abstractly and then on a concrete case study. The formalism we want to represent is an imperative extension of Mini-ML [HM90, MP91, Pfe92], a purely functional restriction of the programming language ML [GMW79, MTHM97]. More precisely, we augment that language with a store and imperative instructions to access and modify the values it contains, we formalize the typing and evaluation semantics of these constructs, and we show that this extended language enjoys the type preservation property. We call this language MLR, for Mini-ML with References. The linear assumptions of LLF can be used to encode individual memory cells, and the linear operators of our type theory offer effective tools to model manipulations on them.

We review the judgments-as-types representation methodology and extend it to handle volatile assumptions in Section 3.1. Then, we give a detailed but informal presentation of the syntax, the semantics, and the type preservation property of MLR in Section 3.2. Finally, we show how to encode these different aspects in LLF in Section 3.3. Appendix A contains the complete LLF signature for this example. In the following, we will concentrate mainly on the novel constructions available in MLR, referring the reader to the literature [CDDK86, HM90, MP91, Pfe92] for aspects already present in Mini-ML.

3.1 Judgments-as-Types Revisited

We will review the technique of judgments-as-types of LF [HHP93] by analyzing the following simplified rule of inference from the case study in this section:

    Γ, x : τ ⊢e e : τ
    ------------------ tpe_fix
    Γ ⊢e fix x. e : τ

Ignoring for the moment the context Γ, it specifies that the fix-point expression fix x. e has type τ if e has type τ assuming that the variable x also has type τ. We will emphasize the fact that x can occur in e by writing e(x). Given a closed expression fix x. e(x), the judgment in the conclusion of tpe_fix postulates that fix x. e(x) has type τ (we need to provide a derivation to ascertain that this is indeed the case). We call such a judgment simple. The judgments-as-types representation methodology encodes simple judgments as base types. In Section 3.3, we will use the type family constants exp and tp, both of kind type, to classify the expressions and the types of the object language, respectively.
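In Elf's concrete syntax these are simply the declarations

    exp : type.
    tp  : type.

both of which are part of the complete signature in Appendix A.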
The general form of the typing judgment above relates an expression and a type, and therefore we encode it as a type family tpe, of kind exp → tp → type. Given representations (fix (λx:exp. ⌜e⌝ x)) and ⌜τ⌝ (to be explained below) for the closed expression fix x. e(x) and for the object language type τ, the simple judgment ⊢e fix x. e(x) : τ is represented as

    tpe (fix (λx:exp. ⌜e⌝ x)) ⌜τ⌝.

The judgment in the premise of rule tpe_fix is different in nature. Indeed, it specifies that the expression e(x) has type τ if we assume that the variable x also has type τ. A judgment of this form is called hypothetical. Notice also that x is a bound variable in fix x. e(x), but it is free in e(x). Therefore, that premise expresses the fact that e(x) has type τ for a generic expression x of type τ. The judgment Γ, x : τ ⊢e e(x) : τ is therefore said to be parametric in x. The judgments-as-types representation methodology encodes hypothetical and parametric judgments by means of simple and dependent function types, respectively. The premise of the rule above, which is parametric in x and hypothetical in x : τ, is represented as follows:

    Πx:exp. tpe x ⌜τ⌝ → tpe (⌜e⌝ x) ⌜τ⌝

Notice that instantiating the parameter x with some term e′ yields a hypothetical judgment postulating that e(e′) has type τ assuming that e′ has type τ. This reduces to a simple judgment as soon as we provide a derivation for this hypothesis. An attempt at finding a canonical LF derivation with the above type reduces to searching for a derivation for the base type tpe (⌜e⌝ x) ⌜τ⌝ after having added the assumptions x : exp and t_x : tpe x ⌜τ⌝ to the context of LF. Viewing this as an alternate encoding for the premise of rule tpe_fix illustrates the manner in which an object context is encoded according to the judgments-as-types methodology: each item in the context of the object formalism is represented as one or more assumptions in the context of LF. This technique offers the further advantage that we can rely on the primitive operations of LF to simulate the lookup of object-level assumptions. Less sophisticated representations, for example those that encode the object context as a term, must provide explicit access operations.

Observe that rule tpe_fix can be read as a judgment that is parametric in the (functional) expression e and the type τ, and hypothetical in the derivability of its premise. Indeed, it is encoded as the following declaration:

    tpe_fix : Πe:exp → exp. Πτ:tp. (Πx:exp. tpe x τ → tpe (e x) τ) → tpe (fix (λx:exp. e x)) τ

or, taking advantage of the concrete syntax of Elf (see Section 2.9),

    tpe_fix : ({x:exp} tpe x T -> tpe (E x) T)
               -> tpe (fix ([x:exp] E x)) T.

In summary, the judgments-as-types representation methodology of LF encodes simple judgments as base types, hypothetical and parametric judgments as simple and dependent function types respectively, and elements of the object context as items in the context of LF. Moreover, derivations of a simple judgment are naturally represented as terms of the corresponding base type.

The judgments-as-types methodology interacts particularly well with higher-order abstract syntax, a technique for the representation of the syntactic level of an object formalism that encodes object variables as meta-variables and relies on the λ-abstraction of LF to emulate generic object-level binding constructs. Above, we encoded the fix-point expression fix x. e(x), which binds the variable x in e(x), as (fix (λx:exp. ⌜e⌝ x)).
We used the λ-abstraction of LF to express binding, and consequently encoded the operator fix by means of the LF constant fix, which accepts a functional argument (fix : (exp -> exp) -> exp).

The faithfulness of the representation of an object formalism is captured by means of adequacy theorems that relate the entities being represented to their encoding. An important advantage of the judgments-as-types technique with respect to less sophisticated approaches is that it produces encodings very close to the notations being formalized. This makes the adequacy theorems easy to prove.

Here, and in the remainder of this paper, we view and describe operations on the context as they arise when we construct derivations "bottom-up", that is, from the judgment in question towards the axioms. This view is the most natural one to elucidate the examples and anticipates the logic programming interpretation of LLF. For example, instead of saying that we discharge a hypothesis in rule oc_ilam in Figure 2, we say that we introduce a hypothesis. From this point of view, LF offers two operations on its context: insertion and lookup. In particular, the context can only grow during the bottom-up construction of a derivation. Therefore, the judgments-as-types methodology in LF cannot capture object languages that perform deletion on their context. Consider as an example the following inference rule, taken from the case study in the next section:

    (S, c = v) K ⊢ return ⟨⟩ ↪ a
    ------------------------------- ev_assign_2
    (S, c = v′) K ⊢ c :=2 v ↪ a

This rule describes the semantics of assignment in an imperative programming language (further details will be given in the next section). It specifies that, in order to assign the value v to the cell c, we must update the binding c = v′ in the store with c = v; some uninteresting value is returned. An elegant encoding of this system in LF would represent each cell-value pair in the store as a meta-level assumption. However, LF does not provide means to simulate the deletion of the old binding, c = v′.

In contrast, we can easily achieve this effect in LLF. Indeed, looking up a linear assumption in λ^{Π⊸&⊤} removes it from the context. This suggests encoding each cell-value pair c = v present at any instant in the store of the object language as an LLF linear assumption C_n :̂ contains c ⌜v⌝. The linear type constructors of λ^{Π⊸&⊤} provide the necessary means to manipulate such assumptions. We rely on ⊸ to enter them in the context of LLF and take advantage of the context splitting semantics of this operator to isolate them in order to access them. The additive product type constructor, &, offers means to duplicate or share linear assumptions among its two conjuncts. This operator can also be used to express selection between exclusive alternatives, although we will not take advantage of this feature here. Finally, the unit type, ⊤, permits discarding unused linear hypotheses.

These different features will be illustrated in detail in Section 3.3. We just show the encoding of the rule above:

    ev_assign*2 : (contains C V -o ev K (return unit) A)
                   -o (contains C V' -o ev K (assign*2 (rf C) V) A).

The linearity of our logical framework can be integrated into higher-order abstract syntax as a convenient manner of encoding languages relying on linear binders [Cer96]. When they are not needed, we can just use the LF fragment of LLF exactly as before.

3.2 Mini-ML with References

Critical choices in the implementation of programming languages depend on the validity of meta-theoretic properties.
Type preservation in Standard ML [GMW79, MTHM97], for example,guarantees that no typing error can arise during evaluation; therefore execution can be spedup signi cantly by disregarding type information at run-time. Meta-theoretic properties in thepresence of non-functional features, included in most concrete languages, are di cult to proveand therefore the formal analysis of imperative extensions of purely functional programminglanguages has received great attention in the literature. The addition of references and theirinteraction with polymorphism has been analyzed with di erent tools, ranging from the complexdomain-theoretic approach of Damas [Dam85] to the syntactic formulation of Harper [Har94].The latter idea was adapted from Wright and Felleisen, who additionally consider continuationsand exceptions [WF94].The proofs of these properties are long and error-prone. Therefore, recent work has investi-gated the possibility of partially automating their generation or at least their veri cation. Chir-imar gives Forum speci cations for a language with references, exceptions, and continuationsand uses the meta-theory of Forum [Mil94] to study program equivalence [Chi95]. VanInwe-gen [Van96] formally proves properties such as value soundness (the fact that evaluating anexpression yields a value, if it terminates) for most of Standard ML with the help of the HOLtheorem prover [GM93].In this section, we de ne MLR as an extension of Mini-ML with references and impera-tive instructions, and study aspects of its meta-theory. Although our principal objective is todemonstrate the expressive power of LLF, our presentation di ers in some aspects from theformulations and proofs in the literature and therefore might be interesting in itself. We willpoint out di erences and similarities with other approaches as they arise.42 Expressions and StoreSince its introduction in [CDDK86], the language Mini-ML and variants of it have been usedfor case studies in the presentation of logical frameworks [HM90, MP91, Pfe92]. Mini-ML isa purely functional restriction of the programming language ML [GMW79, MTHM97]. Morespeci cally, it is a small statically typed functional programming language including numerals,conditional expressions, pairs, polymorphic de nitions, recursion, and functional expressions.We consider an extension of Mini-ML with a store and imperative instructions in the style ofML to access and modify the values it contains. We call this language Mini-ML with References,orMLR for short. The store of anMLR program is de ned as a collection of cells each containinga value. We will sometimes use location or address as synonyms of cell. MLR makes availableall the constructs of Mini-ML but enriches the syntax of its expressions with the necessaryoperations to manipulate individual cells. The resulting language is speci ed by the followinggrammar, where we have separated out the constructs not present in standard presentationsof Mini-ML with a double bar (jj). 
Cells c and stores S are not directly accessible to theprogrammer, but it is customary and convenient to enrich the syntax in order to representintermediate stages during computation.Expressions: e ::= x(Variables)j z j s e j case e of z ) e1 j s x ) e2(Natural numbers)j hi(Unit element)j he1; e2i j fst e j snd e(Pairs)j lam x: e j e1 e2(Functions)j letval x = e1 in e2 j letname x = e1 in e2 (De nitions)j x x: e(Recursion)jj c j ref e j !e(References)j e1 := e2 j e1; e2(Commands)Stores: S ::= j S; c = vIn these productions, c ranges over the lexical category of memory locations, while we use theletter x for variables. The meta-variable v denotes values, that we will de ne shortly. We willtreat stores as multisets, omit the leading from a non-empty store, and overload ; to denotethe union of two stores. Finally, we require the cells appearing on the left-hand side of a storeitem to be distinct.The polymorphism in MLR is restricted to values, which is generally accepted as superior tothe imperative type variables present in previous versions of SML [LW91]. We achieve this bydistinguishing two forms of let. The expression ref e dynamically allocates a cell and initializesit with the value of e. The contents of a cell can be inspected by dereferencing it with ! andmodi ed with an assignment (:=). Di erently from [WF94], but consistently with the mainstream in the literature (including the de nition of Standard ML [MTHM97]), we choose thisoperation not to return the assigned object, but the unit element hi. The sequencing operator(;) is typically used as a means of chaining a series of assignments with some interesting nalvalue; it is syntactic sugar for the expression (letval x = e1 in e2) when x does not occur in e2.43 As is normally the case in functional languages, MLR does not o er explicit means to deallocatememory cells.All these constructs are available in Standard ML [MTHM97] with the exception of addressesthemselves (c), which cannot be manipulated directly in that language. We require MLR pro-grams not to mention locations directly so that cells are always guaranteed to be initialized.Thus cells are created dynamically with ref and can be named by binding them to variableswith one of the two let constructs of MLR.As inML, the reference cells of MLR encompass two distinct features of imperative program-ming languages such as C or Pascal. First of all, they play the role of the imperative variablesof these languages and can be used as such (except for the necessity of dereferencing them ex-plicitly in order to access their value). Second, we can use them as pointers in data structures,although their usefulness is rather limited in this respect due to the absence of recursive datatypes in MLR. Such data structures could be easily added to the language.TypingThe language of types of MLR augments the typing constructs typically present in Mini-ML,namely natural numbers, unit, pairing, and functional types, with one new constructor: for eachtype , the type ref for references to objects of type . The syntax of types is summarized inthe following grammar:Types:::= j nat j 1 j 1 2 j 1 ! 2jj refWe use type variables to express schematic polymorphism. We eliminate an explicit quanti erin favor of substitution in the typing rule for the letname construct (see Figure 7). On thebasis of this de nition, the static semantics of MLR naturally extends the traditional typingrules of Mini-ML. 
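For instance, the following small program, used here only as an illustration of how the new constructs combine, is well-typed according to the rules discussed next:

    letval r = ref z in (r := s (!r); !r)

The subexpression ref z has type nat ref, so the variable r is assumed at type nat ref; the assignment r := s (!r) has type 1, and the sequencing returns the type of !r, namely nat, which is therefore the type of the whole expression.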
The possibility of expressions to mention cells requires introducing a store context as a means to declare the type of free locations. More precisely, the item c : τ in a store context declares τ as the type of the values that c can contain; c itself consequently has type τ ref. Contexts, as usual, assign types to free variables. They are constructed according to the following grammar:

    Contexts:        Γ ::= · | Γ, x : τ
    Store contexts:  Λ ::= · | Λ, c : τ

We rely on the usual convention that the names of the variables and the cells declared in contexts and store contexts, respectively, are distinct. Moreover, we treat both forms of contexts as multisets.

We express the fact that the MLR expression e has type τ with respect to a store context Λ and a context Γ with the judgment

    Λ; Γ ⊢e e : τ

    Figure 7: Typing Rules in MLR, Expressions and Store

The presence of a store context in the typing rules for MLR is necessary even if we forbid the users to write addresses directly in their programs. It accounts for cells dynamically allocated during evaluation, which may appear in intermediate results and in the final answer.

The inference rules for the typing judgment are displayed in Figure 7. The upper part of this figure shows the rules for the functional core of MLR. The changes with respect to the usual rules for Mini-ML are limited to the systematic inclusion of a store context in the judgments.

The central part of Figure 7 shows the rules for the novel features of MLR. As for the functional case, they express the conditions under which an expression can be statically accepted as meaningful. For example, rule tpe_deref enforces that only references be dereferenced.

We present in the lower part of Figure 7 the rules for typing a store. The judgments we consider have the form

    Λ′ ⊢S S : Λ

that we interpret as requiring that the type of each value v stored in S coincides with the type of the corresponding cell as specified in Λ. The store context Λ′ gives the type of the cells v may mention. We will always be interested in top-level judgments of the form

    Λ ⊢S S : Λ

since a store will in general refer circularly to its own cells. Rule tpS_cell prevents expressions containing free variables from being inserted into the store.

Evaluation

An MLR expression e will in general mention reference cells whose values are contained in the store. The evaluation of e will typically not only retrieve these values, but also change them or create new cells.
Therefore, as e is evaluated, the store will undergo transformations, and bythe time a value for e is eventually produced, it might appear very di erent from the store westarted with. This observation suggests an evaluation judgment of the formS; e ,! S0; vwhere S is the store prior to evaluating e, and S0 results from the evaluation of e to v: cellsin e refer to S while cells in v refer to S0. This formulation extends the traditional evaluationjudgment for Mini-ML [HM90, MP91, Pfe92].The dynamic semantics of functional languages enriched with imperative features, such asMLR's references, is normally expressed in the literature in this manner. We will instead adopt adi erent strategy and present the reductions occurring during the execution of anMLR programas continuation-based evaluation rules. This choice has been dictated by our intention to encodethe semantics ofMLR in LLF . A direct representation of the judgment above, although possible,would have resulted in a less elegant encoding. For similar reasons, Chirimar [Chi95] also chosea continuation-based formulation.Di erently from more declarative formulations, a continuation-based execution strategy im-poses a strict order of evaluation on the di erent subexpressions of any given construct in thelanguage. This order respects the expected ow of data and is therefore natural. For example,when computing the value of an expression of the form (letval x = e1 in e2) we will rstevaluate e1, obtain a value v0, substitute it for x in e2 and only then evaluate the resultingexpression.An e ective implementation of this strategy requires sequentializing the evaluation of thesubexpressions of constructs with more than one argument. One of them is evaluated imme-diately while the evaluation of the others is postponed until a value has been produced for it.Clearly, if a subexpression depends on the value of another, we process it last. We realize thisidea by maintaining a stack of expressions to be evaluated, called a continuation.Postponing the evaluation of an expression e2 in favor of another expression e1 is achievedby pushing the former into the continuation. Since, as when evaluating (letval x = e1 in e2)46 for example, the value of e1 might need to be substituted for some free variable x in e2, we wrapa binder for x around e2 and thus insert an object of the form x: e2 into the continuation (orcompose it with the current continuation, depending on whether the continuation is viewed as astack of functions, or as a single function corresponding to their composition). For uniformity,it is convenient to take this measure every time we insert an item into the stack. As soon as e1has been fully evaluated to a value v, x: e2 is extracted from the continuation, v is substitutedfor the variable x in e2, and [v=x]e2 is evaluated in turn.The necessity of distinguishing expressions still to be evaluated from values being returned re-quires the introduction of the new syntactic layer of instructions. Speci cally, we write eval e forthe request to evaluate an expression e and denote the intention to return a value v as return v.Instructions are needed also for the purpose of handling partially evaluated expressions.While evaluating a Mini-ML expression simply yielded a value, MLR expressions will ingeneral produce objects mentioning cells. 
Therefore, the result of the evaluation of an instruction i must include not only a final value v but also a reification [S′] of the final store S′ it draws its references from; moreover, as a measure of hygiene, we mark the cells c that have been introduced during the evaluation process by binding them in front of the pair ([S′], v) by means of the new c. operator. The resulting object is called an answer and is indicated with the letter a. For our purposes, [S′] will be a sequence obtained by ordering the elements of S′ according to some arbitrary order. It is however conceivable that only the cells that contribute to the final value be kept, realizing in this way a form of garbage collection.

The structure of instructions, continuations and answers is given by the following grammar, where we have indicated with the double bar the instructions introduced in correspondence to the imperative constructs of MLR.

    Instructions:   i ::= eval e | return v
                        | case v of z ⇒ e1 | s x ⇒ e2
                        | ⟨v, e⟩ | fst v | snd v
                        | app v e
                        || ref* v | deref* v | v :=1 e | v1 :=2 v2
    Continuations:  K ::= init | K, λx. i
    Answers:        a ::= ([S], v) | new c. a

The typing rules for objects in these three categories are displayed in Figure 8. Notice that the type of an answer coincides with the type of the embedded value. Rule tpa_val requires that the store it is paired with be well-typed, while rule tpa_new constrains every occurrence of the cells bound in an answer to be consistently typed.

Values constitute the subclass of expressions that evaluate to themselves. They are specified by the following grammar.

    Values:  v ::= x | z | s v | ⟨⟩ | ⟨v1, v2⟩ | lam x. e  ||  c

On the basis of this definition, we can justify the uses of the term "value" in the above presentation. Not only does return operate only on values, but computation places a value at the heart of answers, and the contents of every cell in the store is a value. See [Cer96] for a formal statement of these properties.

    Figure 8: Typing Rules in MLR, Instructions, Continuations and Answers

We model the continuation-based semantics of the imperative constructs of MLR by means of a judgment of the form

    S K ⊢ i ↪ a

where i is the instruction to be executed, K is the current continuation, S is the store with respect to which i is to be evaluated, and a is the final answer produced as the result of the evaluation.

The inference rules for evaluation are given in Figures 9–10. The evaluation of most instructions in the functional core of MLR does not access the store. The only exception is rule ev_init, which must package the current store together with the produced value in order to construct the final answer. More generally, this operation could also include garbage collection, but we do not pursue this possibility here.

    Figure 9: Evaluation in MLR, Expressions
    Figure 10: Evaluation in MLR, Values and Auxiliary Instructions

The inference rules concerned with the non-functional expressions of MLR and the corresponding instructions are separated out by a dotted line in Figures 9 and 10, respectively.

Cells (rule ev_cell) simply evaluate to themselves, like any value. The sequencing instruction e1; e2 has a simple semantics too: it evaluates e1, disregards the returned value, and then proceeds with the evaluation of e2 (rule ev_seq).

The evaluation of ref e computes the value v of e (rule ev_ref), allocates a new cell c in the store, initializes it with v, and finally returns c itself (rule ev_ref*). Notice that rule ev_ref* also has the effect of binding c in the final answer by means of the new c. binder. The argument part of !e is evaluated to a reference cell (rule ev_deref) and the value associated with it is returned (rule ev_deref*). We rely on the auxiliary read judgment S ⊢ c = v in order to retrieve the value of a cell (rule read_val). The evaluation of e1 := e2 first evaluates e1 to a store location c (rule ev_assign), computes the value v of e2 (rule ev_assign_1), and replaces the former contents of c with v (rule ev_assign_2).
The returned value is hi.We conclude our discussion about evaluation with a few words about the interaction ofreferences and polymorphism. The question is subtle and has received great attention in the50 literature [Tof90, Har94, LW91]. Consider for example the following MLR expression:letname f = ref (lam x: x)in f := lam x: s x;!f hiAt rst sight, this expression allocates a cell and initializes it with the identity function, whichhas polymorphic type ! . In the body of letname, we rst update it to the successorfunction, of type nat ! nat, and then apply it to hi, of type 1. Clearly, something is wrong,but the typing rules of MLR accept the program above as a correct expression of type 1. Is therea aw in the de nition of the static semantics of our language? Fortunately, no. A closer analysisreveals that, since the evaluation of letname substitutes ref (lam x: x) for every occurrencesof f in its body, the expression above reduces to:ref (lam x: x) := lam x: s x;(!ref (lam x: x)) hiEach occurrence of ref (lam x: x) evaluates to a di erent cell that is typed according to its use.The expression above would not be typable if we had used letval in place of letname.Languages with explicit type variables solve the same problem by distinguishing betweenapplicative and imperative type variables in order to avoid problems such as the above [GMW79,MTHM97, Har94]. Restricting polymorphism to values has also been proposed as a solution tothis problem [Tof90] and has been adopted in the current de nition of Standard ML [MTHM97].This language o ers only one form of let, but it takes di erent courses of action depending onwhether it de nes a value or an arbitrary expression. Our treatment is slightly more generalsince it makes the call-by-name semantics of letname directly available: for example, the aboveexpression does not type-check in SML.Type PreservationWe conclude this section with the statement of the type preservation theorem for MLR and ofthe lemmas it depends on. For reasons of space, we will not formalize the proof of these resultsin LLF . The interested reader can nd an encoding of this proof in our linear logical frameworkin [Cer96].The type preservation theorem states that the type of an expression does not change as theresult of evaluation. The proof of the type preservation theorem relies on a number of auxiliarylemmas. The rst is weakening : whenever an expression is well-typed in a given context andstore, it remains well-typed under further assumptions and additional cells. This is easily provedby induction on typing derivations.The second auxiliary property we need is the substitution lemma: it states that free variablesin a well-typed expression can be substituted for expressions of the same type and the resultwill be well-typed.51 Lemma 3.1 (Substitution)i . If T :: ; ; x : 0 `e e : and ; `e e0 : 0, then ; `e [e0=x]e : .ii . If T :: ; ; x : 0 `i i : and ; `e e0 : 0, then ; `i [e0=x]i : .Proof.We proceed by induction on the structure of T .2XAs in the functional case, type preservation ensures that the type of an expression is identicalto the type of its value. Intermediate evaluation steps require us to take into account arbitrarycontinuations and stores. We have the following generalization.Theorem 3.2 (Type preservation)If S K ` i ,! 
a with ; `i i : , `K K : ) 0 and `S S : , then `a a : 0.Proof.We proceed by induction on the structure of a derivation of the evaluation judgment andinversion on the derivations of the typing judgments.2XThe type preservation result is formalized as follows at the top level of evaluation.Corollary 3.3 (Type preservation)If init ` eval e ,! a with ; `e e : , then `a a : .23.3 Representation in LLFIn this subsection, we give an LLF representation of the syntax ofMLR, of its static and dynamicsemantics and show how to exploit the resulting encoding of computations. The representationwe propose is a natural extension of the LF code for Mini-ML found in the literature [MP91]. Inparticular, it retains its structure, its elegance, and the ease of proving its adequacy with respectto the informal presentation we just concluded. We describe the main issues in the representationby displaying fragments of the code and a limited number of adequacy statements. A completetreatment can be found in Appendix A. It is interesting to compare the result of our encodingwith similar endeavors in the literature.VanInwegen used the HOL theorem prover [GM93] to verify properties about a substantialportion of Standard ML [Van96]. She adopted a brute-force approach to the meta-representationproblem, encoding, for example, contexts as terms. This choice resulted in a complex represen-tation, and only partial achievement of the main goal of this endeavor: a formal proof of typepreservation for that language. Although on a much simpler fragment, our use of higher-orderabstract syntax, of parametric and hypothetical judgments, and of the linear features of LLFavoids these di culties completely.Chirimar used Forum [Mil94] to represent a language similar to MLR with the addition ofexceptions and continuations [Chi95], but without any emphasis on typing. He took advantage52 of the higher-order nature of Forum and of its linear constructs. The resulting program is aselegant as our code and is proved adequate with respect to the informal speci cation of the objectlanguage. The absence of proof-terms in Forum prevents the direct manipulation of object-levelderivations and no attempt is made to use that meta-language to investigate meta-theoreticproperties such as type preservation.SyntaxThe representation of the syntactic level of MLR is based on higher-order abstract syntax anddoes not require the expressive power of the linear constructs of LLF . It lies therefore in theLF fragment of this language.As is normally done in LF, every syntactic category of the object language is mapped to adistinguished base type. The type families necessary to encode the syntactic categories of MLRare given by the following declarations:exp : type:cell : type:tp : type:store : type:instr : type:cv: type:cont : type:answer : type:The four declarations on the left encode expressions, instructions, types, and continuations.The four on the right are needed to represent the imperative features of MLR programs. cellcorresponds to the lexical category of memory cells. cv and store will be used to represent thestore. Finally, answer encodes nal answers.We encode the abstract syntax of MLR expressions, as described in the grammar of Sec-tion 3.3, by means of the representation function p q. This function maps every production toan LLF object constant that, when applied to the representation of the subexpressions that itrelates, yields an object that has type exp. 
The function p q is inductively de ned on the left-hand side of the table below (we have separated out the treatment of the imperative constructs);53 its right-hand side gives the type of the constants used in the encoding.pxq = xpzq = zz: exp.ps eq = s peqs: exp -> exp.pcase eof z ) e1j s x )e2q = case peqpe1q([x:exp]pe2q)case : exp-> exp-> (exp -> exp) -> exp.phiq = unitunit : exp.phe1; e2iq = pair pe1q pe2qpair : exp -> exp -> exp.pfst eq = fst peqfst: exp -> exp.psnd eq = snd peqsnd: exp -> exp.plam x: eq = lam ([x:exp]peq)lam: (exp -> exp) -> exp.pe1 e2q = app pe1q pe2qapp: exp -> exp -> exp.pletval x = e1in e2 q = letval pe1q([x:exp]pe2q)letval : exp-> (exp -> exp) -> exp.pletname x = e1in e2 q = letname pe1q([x:exp]pe2q) letname : exp-> (exp -> exp) -> exp.p x x: eq = fix ([x:exp]peq)fix: (exp -> exp) -> exp.pcq = rf crf: cell -> exp.pref eq = ref peqref: exp -> exp.p!eq = !peq!: exp -> exp.pe1 := e2q = assign pe1q pe2qassign : exp -> exp -> exp.pe1; e2q = seq pe1q pe2qseq: exp -> exp -> exp.The representation of most expressions re ects directly the abstract syntax ofMLR. We takeadvantage of higher-order abstract syntax in the representation of cells, variables, and bindingconstructs of MLR. Variables are encoded as LLF variables (of type exp). The fact that anobject-level construct binds a variables x in a sub-expression e is then modeled by using the-abstraction of LLF in order to bind x in peq. Cells appear as hypotheses c : cell in thecontext of LLF, similarly to free variables. Their representation as expressions is mediated bythe constant rf, which maps entities of type cell to objects of type exp.As an example, consider again the following MLR expression from the previous section:letname f = ref (lam x: x)in f := lam x: s x;!f hiIt is represented by the following LLF term of type exp:54 letname(ref (lam ([x] x)))([f] (seq(assign f (lam ([x] (s x))))(app (! f) unit)))))The faithfulness of this representation with respect to the object level syntax of expressionsconsists of a number of properties that we summarize in the following adequacy theorem, wherecorresponds to the signature in Appendix A:Theorem 3.4 (Adequacy of the representation of MLR expressions)The function p q above is a compositional bijection between MLR expression with free vari-ables among x1; : : : ; xn and cells c1; : : : ; cm, and canonical LLF objectsM such that the judgmentx1 :exp; : : : ; xn :exp; c1 :cell; : : : ; cm :cell ̀ M * expis derivable.2Compositionality in this statement means that the representation function commutes with sub-stitution, i.e., that for every MLR expression e and e0, p[e0=x]eq = [pe0q=x]peq. It con rms thecorrect application of higher-order abstract syntax in our encoding. Note that compositionalityis not needed for cells since they are never subject to substitution.Due to the complexity of our object language, we do not display the simple but long andsomewhat tedious inductive proof of this statement. The interested reader is referred to [Cer96]for a full treatment; the proof of a di erent adequacy statement is sketched at the end of thissection. The techniques used in order to prove adequacy theorems for LLF encodings naturallyextends the methods successfully applied for years in the more restricted setting of LF . Inparticular, they retain their simplicity in our richer applicative arena. 
This contrasts with otherproposals, e.g., the treatment of linearity in LF itself [Pfe94b], where adequacy theorems havecomplex proofs even for simple object languages.Types, instructions and continuations are represented in a similar way. The LLF declarationsfor the constants needed in their encoding can be found in Appendix A. We omit displayingthe statements of the respective adequacy theorems since they do not introduce new concepts.They can be found in [Cer96].MLR makes a dual usage of the collection of cell-value pairs that we informally referred toas its store: as an repository from which to retrieve the value associated with a cell duringevaluation (the proper store we indicated as S), and as a term to be eventually returned withthe nal answer (the rei ed store we denoted [S]). We will correspondingly have two distinctLLF representations of the store. We will discuss the internal encoding pSq of a proper storeS when considering evaluation. A rei ed store [S] is given the following external represention55 p[S]q:p[ ]q = estoreestore : storep[S; c = v]q = with p[S]q (holds c pvq) with : store -> cv -> store.holds : cell -> exp -> cv.Here and in the following, we systematically overload the notation p q used for expressions todenote the various representation function that are required in this example. The nature of itsargument should always permit disambiguating which speci c function we are refering to in eachcase.The representation of answers directly expresses the grammatical rules:p([S]; v)q = close p[S]q pvqclose : store -> exp -> answer.pnew c: aq = new ([c:cell]paq) new : (cell -> answer) -> answer.The declarations for these constants are repeated in Appendix A. The adequacy theoremsthat link them to the syntax of MLR are reported in [Cer96].Static SemanticsOn the basis of the above encoding of the syntax of MLR, we will now describe the meta-representation of the static semantics of this language.As for syntax, the representation of the static semantics of MLR does not rely on the linearfeatures of LLF . The resulting code lies therefore in the Elf fragment of our logical framework.We have the following declarations for the type families that model the various typing judgmentspresented in Section 3.2: tpe : exp -> tp -> type.tpi : instr -> tp -> type.tpK : cont -> tp -> tp -> type.tpc : cell -> tp -> type.tpS : store -> type.tpa : answer -> tp -> type.Again we have separated out the declarations that su ce for the functional core of MLR fromthe type families that are required to handle the imperative aspects of this language. The rstthree represent the typing judgments for expressions, instructions, and continuations, while tpSand tpa encode respectively the store and answer typing judgments, and tpc records the typeof individual cells in the store.We illustrate the representation of the static semantics of MLR by displaying how to encodea typing derivation for expressions. The remaining typing judgments are treated similarly andthe resulting LLF declarations are presented in Appendix A.56 In Section 3.2, we denoted the fact that an expression e has type assuming the types givenin for its free variables and the types given in for its reference cells as the hypotheticaljudgment ; `e e : . We represent the schematic form of this judgment by means of the LLFtype family tpe. This family accepts two parameters: the representation of an expression andthe representation of a type. Therefore, the instance above will be encoded as the LLF basetype tpe peq p q. 
The context is taken into consideration only when checking that thisterm is indeed derivable in LLF . Then, we will encode each pair xi : i in by means of an LLFhypothesisti : tpe xi p iqwhere the free variable xi is declared as an expression (xi :exp). Similarly, we encode every itemcj : 0j in as the (intuitionistic) assumptiont0j : tpc cj p 0jqin the context of LLF, where cj is declared as a cell (cj : cell). Note that tpc only serves thepurpose of making typing assumptions for cells. We write p q and p q for the encoding we justoutlined for the context and the store context , respectively.The inference rules de ning the derivability of the typing judgment for MLR are encodedaccording to the technique presented in Section 3.1. We consider two rules as additional exam-ples, the remaining declarations can be found in Appendix A. Rule tpe z associates the typenat to the numeral z. We represent it by means of the LF constant tpe z that relate z to nat:ptpe z; `e z : nat q=tpe z : tpe z nat.Rule tpe case speci es how to type-check a conditional expression. We repeat it from Figure 7:; `e e : nat ; `e e1 :; ; x :nat `e e2 : tpe case; `e case e of z ) e1 j s x ) e2 :This rule has multiple premises. It is hypothetical because the rightmost premise inserts theassumption x :nat into the context. It is also parametric since the variable x is bound in the caseconstruct, but it appears as a new symbol both in the added hypothesis and in the expressione2 type-checked by the rightmost premise. We represent this rule by means of the LLF constanttpe case and encode its structure in the associated type. We have the declaration:tpe_case : tpe E nat-> tpe E1 T-> ({x:exp} tpe x nat -> tpe (E2 x) T)-> tpe (case E E1 ([x:exp] E2 x)) T.Notice the quanti cation over x and the embedded implication with antecedent tpe x nat in theencoding of the third premise. In this declaration, the LLF variables E, E1, E2 and T correspondto the schematic variables e, e1, e2 and respectively. They are implicitly quanti ed at thefront of the declaration.57 It is worth noticing that there is no declaration in correspondence of rule tpe x:tpe x; ; x : `e x :Since assumptions are represented directly in the context of LLF, a judgment of the formtpe x T , where T is the representation of some concrete type , will be validated by access-ing the context of LLF rather than the signature, and succeed precisely when tx : tpe x Tappears in it as an assumption. Similar considerations hold for reference cells.We have now the means for representing derivations of expression typing judgments. Theadequacy theorem below ensures that whenever T is a (valid) derivation for the MLR judgment; `e e : , it is a canonical inhabitant of the LLF type tpe peq p q with respect to theproper encoding of and , and vice versa.Theorem 3.5 (Adequacy of the representation of MLR expression typing)Given an MLR expression e and a type , there is a compositional bijection p q betweenderivations of; `e e :and LLF objects M such that p q; p q ̀ M * tpe peq p qis derivable.2Dynamic SemanticsUnlike syntax and static semantic, the representation of evaluation relies heavily on the linearfeatures of LLF . It is based on the following four type families:ev: cont -> instr -> answer -> type.contains : cell -> exp -> type.collect : store -> type.read: cell -> exp -> type.which we will describe in turn.Assuming the appropriate representation functions p q for continuations, instructions, andanswers, we model the continuation-based judgment S K ` i ,! 
a as the LLF base typeev pKq piq paq. The store S mentioned by this judgment is represented in a distributedfashion in the context of LLF . Each item c = v in S is modeled by two assumptions: rst ofall, we need to declare c as a cell and we do so by means of the assumption c : cell, second,we represent the fact that the current contents of c is v by a linear hypothesis of the formh :̂ contains c pvq. The rst assumption should clearly be intuitionistic since c may be58 mentioned many times in K, i, a and S. In contrast, the second must be linear since assignmentupdates the value associated with a cell destructively. If h were an intuitionistic hypothesis, wewould have no means of prohibiting the old value from being accessed. In summary, we associateto every proper store S = (c1 = v1; : : : ; cn = vn) the following internal representation:pSq = c1 :cell;: : : ; cn :cell;h1 :̂ contains c1 pv1q; : : : ; hn :̂ contains cn pvnqFour rules in the deductive system for continuation-based evaluation presented in Figures 9{10 access the store directly: ev ref , ev assign 2, ev deref , and ev init. We will illustratethe use of the linear features of LLF on their encoding. However, in order to gain familiaritywith our representation technique, we will rst analyze rule ev z. All other inference gures aretreated similarly to this rule. The complete code is displayed in Appendix A.A bottom up reading of rule ev z, shown below on the left, speci es that evaluating z simplyamounts to returning it as a value. We represent this rule by means of the declaration for theconstant ev_z shown on the right:pS K ` return z ,! a ev zS K ` eval z ,! aq = ev_z : ev K (return z) A-o ev K (eval z) A.The linear arrow in the representation of rule ev z enables its antecedent and its consequentto access the same linear assumptions in the context. This accounts for the fact that thepremise and the conclusion of this rule mention the same store. Had we used an intuitionisticimplication, the antecedent (and therefore the whole expression) would have been applicableonly with contexts deprived of any linear assumptions, corresponding to empty stores.Rule ev ref , repeated below on the left, creates a new location c in the store and initializesit with the argument v of ref . Its representation on the right models these actions on thecontext of LLF : the new cell is intuitionistically assumed when processing the dependent type{c:cell}, while the resolution of the embedded linear implication has the e ect of assertingcontains c V in the linear part of the context. Since this assumption is made linearly, it willbe possible to remove it from the context, for example in order to update the value containedin c in response to an assignment. Notice how the newly created cell c is bound in the nalanswer.p(S; c = v) K ` return c ,! a ev refS K ` ref v ,! new c: aq=" ev_ref* : ({c:cell} contains c V-o ev K (return (rf c)) (A c))-o ev K (ref* V) (new ([c:cell] A c)).Of the three rules that realize assignment in MLR, only ev assign 2 accesses the store. Thedeclaration ev assign*2 below mimics the destructive update of the contents of the cell c (writ-ten C in the clause) in the store in two steps. First the old value is retrieved by contains C V'.Since it appears as a linear assumption, accessing it causes its removal from the linear context ofLLF . Since the other antecedent of this clause is reached through the multiplicative connective59 , the remaining linear hypotheses will be passed to it. This term inserts the new value v(i.e. 
V) of c in the representation of the store by means of the antecedent contains C V of theembedded linear implication.p(S; c = v) K ` return hi ,! a ev assign 2(S; c = v0) K ` c := 2 v ,! aq =ev_assign*2 : (contains C V -o ev K (return unit) A)-o (contains C V' -o ev K (assign*2 (rf C) V) A).Dereferencing a cell c is naturally modeled in LLF through the use of the additive operators ofour language. In order to encode rule ev deref , we need two copies of the store representation:one to retrieve the contents of c, and one to proceed with the evaluation. This is immediatelyachieved by means of the additive conjunction of LLF . We have the following declaration:pS ` c = v S K ` return v ,! a ev derefS K ` deref c ,! aq = " ev_deref* : (read C V& ev K (return V) A)-o ev K (deref* (rf C)) A.The conjunct read C V, which implements the read judgment S ` c = v, looks up its copy ofthe linear context in search of the assumption contains C V and relies on the additive unit ofLLF, written , to discard the rest. This technique is generally applicable to every situationthat involves looking up the encoding of volatile information. The de nition of read consists ofa single clause encoding rule read val:pread val(S; c = v) ` c = vq = " read_val : contains C V-o -o read C V.We could have alternatively modeled dereferencing similarly to assignment by rst accessing thelinear assumption contains C V directly. In order to balance its consequent removal from thelinear context of LLF, this same assumption should be re-entered in the context before returningthe value V. We would have the following declaration:ev_deref*' : (contains C V -o ev K (return V) A)-o (contains C V -o ev K (deref* (rf C)) A).Although it achieves a similar e ect, this declaration does not encode rule ev deref , orread val, or any combination of these rules. Instead`, it is a transliteration of the followinginference rule, which we could have used to formalize dereferencing:(S; c = v) K ` return v ,! a ev deref 0(S; c = v) K ` deref c ,! aFinally, rule ev init pairs up the store and the nal value in order to produce the answer. Wemodel this behavior by means of the auxiliary procedure collect which translates the internal60 representation of the store S, as linear LLF assumptions, to its external representation [S], asan object of type store.pev initS init ` return v ,! ([S]; v)q = ev_init : collect S-o ev init (return V) (close S V).The code for collect is displayed below.col_empty : collect estore.col_cv : contains C V-o collect S-o collect (with S (holds C V)).Since the use of multiplicatives removes the assumptions contains C V as they are retrieved,each recursive access to collect adds a di erent item to the external representation of the store.Clause col empty is provable only when the linear part of the context of LLF is empty, andtherefore only when the complete store of MLR has been externalized.The e ectiveness of the representation we just illustrated relies on the ability to removeobjects from the context of LLF . Using LF on this problem would have produced awkward en-codings with prohibitive consequence for the development of the meta-theory of MLR [Cer96]:a rst alternative would have relied entirely on the external representation of the store, im-plementing all the operations required to access and modify it explicitly. 
A second alternativewould be to proceed as we did, with the tedious addition of declarations aimed at checking thelinearity of the resulting derivations a posteriori.We will now make the above motivating discussion more precise. The faithfulness of ourrepresentation of evaluation is expressed by the following adequacy theorem.Theorem 3.6 (Adequacy of the representation of MLR evaluation)Given an MLR continuation K, an instruction i, a store S and an answer a, where K, i,S and a are closed except for the possible presence of free cells, there is a bijection p q betweenderivations ofS K ` i ,! aand LLF objects M such that pSq ̀ M * ev pKq piq paqis derivable.2In order to prove the above theorem, we will decompose it into four parts. Again, is thesignature contained in Appendix A. We need to prove the following properties:Functionality: p q is a total function from MLR evaluation derivations to LLF objects over.61 Soundness: The representation of a derivation of a given MLR evaluation judgment is an LLFobject whose type is the representation of this judgment.Completeness: Whenever a canonical LLF object over inhabits the type correspondingto the encoding of an MLR evaluation judgment, this object is the representation of aderivation of that judgment.Bijectivity: p q is a bijection between evaluation derivations in MLR and canonical LLF ob-jects whose type encodes the corresponding evaluation judgment.Di erently from expressions and typing derivations, the representation function p q is triviallycompositional (it involves closed expressions only), otherwise we should prove it as an additionalproperty.Detailed proofs of these properties are long and rather tedious, although conceptually simple.We will sketch them by using the declaration for ev_assign*2 as a representative case. In orderto do so, we repeat it complete with the -quanti ers we omitted in the above presentation:ev_assign*2 : {C:cell}{V':exp}{V:exp}{K:cont}{A:answer}(contains C V -o ev K (return unit) A)-o (contains C V' -o ev K (assign*2 (rf C) V) A).In the speci c case of this example, it is convenient to state and prove the functionality andsoundness properties together. We have the following result:Lemma 3.7 (Functionality and soundness of the representation of MLR evaluation)Given a store S, a continuation K, an instruction i and an answer a, where K, i, S and aare closed except for the possible presence of free cells, for every derivation E of the judgmentS K ` i ,! a, pEq is de ned and unique, and the LLF judgmentpSq ̀ pEq * ev pKq piq paqis derivable.Proof.This proof proceeds by induction on the structure of the derivation E . We illustrate only thecase in which it ends with an application of rule ev assign 2. Therefore,E =E 0(S ; c = v) K ` return hi ,! a ev assign 2(S ; c = v0) K ` c := 2 v ,! awhere S = (S ; c = v0) and i = (c := 2 v). Let us also denote as S0 the store (S ; c = v). Noticethat pSq = pS0q.62 By induction hypothesis on E 0, we deduce that there is a unique LLF object M 0 such thatM 0 = pE 0q and there is a derivation of pS0q ̀ M 0 * ev pKq (return unit) paq.Iterated applications of the LLF rule oa iapp are used to instantiate the arguments of thedeclaration for ev assign*2. Indeed, there is an atomic derivationA00 of the following judgment:pS0q ̀ ev assign*2 c pv0q pvq pKq paq #2664(contains c pvqev pKq (return unit) paq)(contains c pv0qev pKq (assign*2 (rf c) pvq) paq) 3775Let t :̂contains c pvq be the assumption in pS0q corresponding to the pair (c = v) in S0. 
We can abstract it over M′ in the LLF derivation for this object, obtaining a derivation C′′ of the judgment

    ⌈S₀⌉ ⊢_Σ λ̂t:contains c ⌈v⌉. M′ ⇑ (contains c ⌈v⌉ ⊸ ev ⌈K⌉ (return unit) ⌈a⌉),

where ⌈S₀⌉ differs from ⌈S′⌉ only by the removal of the assumption t. Since ⌈S′⌉ = (⌈S₀⌉, t :̂ contains c ⌈v⌉), we can then apply rule oa_lapp to A′′ and C′′, obtaining a derivation A′ of:

    ⌈S₀⌉ ⊢_Σ ev_assign*2 c ⌈v′⌉ ⌈v⌉ ⌈K⌉ ⌈a⌉ ^ (λ̂t:contains c ⌈v⌉. M′)
          ⇓ contains c ⌈v′⌉ ⊸ ev ⌈K⌉ (assign*2 (rf c) ⌈v⌉) ⌈a⌉

Let t′ :̂ contains c ⌈v′⌉ be the assumption in ⌈S⌉ corresponding to the pair (c = v′) in S. Then, there is a derivation C′ of the LLF judgment (⌈S₀⌉, t′ :̂ contains c ⌈v′⌉) ⊢_Σ t′ ⇑ contains c ⌈v′⌉. We can then apply rule oa_lapp again to A′ and C′ to obtain a derivation A of:

    ⌈S₀⌉, t′ :̂ contains c ⌈v′⌉ ⊢_Σ ev_assign*2 c ⌈v′⌉ ⌈v⌉ ⌈K⌉ ⌈a⌉ ^ (λ̂t:contains c ⌈v⌉. M′) ^ t′
          ⇓ ev ⌈K⌉ (assign*2 (rf c) ⌈v⌉) ⌈a⌉

In order to understand this formula, observe that ⌈S⌉ = (⌈S₀⌉, t′ :̂ contains c ⌈v′⌉). We now apply rule oc_a to A to get the desired canonical derivation.

At this point, it is enough to notice that the LLF object M appearing on the left of the arrow in this canonical judgment is the representation of the MLR derivation E above, and that the type on the right of the arrow is the representation of the judgment it derives. It is also easy to ascertain that M is unique, given the uniqueness of M′. ⊠

We now consider the completeness of the encoding of MLR evaluation derivations. We have the following lemma.

Lemma 3.8 (Completeness of the representation of MLR evaluation)
Given a store S, a continuation K, an instruction i and an answer a, where K, i, S and a are closed except for the possible presence of free cells, for every LLF object M such that the judgment

    ⌈S⌉ ⊢_Σ M ⇑ ev ⌈K⌉ ⌈i⌉ ⌈a⌉

has a derivation C, there is a derivation E of the MLR judgment S K ⊢ i ↪ a such that M = ⌈E⌉.

Proof.
We proceed by induction on the structure of M. Since the type in C is a base type, M can either be a constant, a variable, or start with a destructor. Then M has the following structure:

    M = cM ⋆ M1 ⋆ … ⋆ Mn,

where cM is a constant in Σ of some appropriate type, ⋆ represents either linear or intuitionistic application, and M1, …, Mn are objects of some type. The proof now distinguishes cases on the basis of the possible constants cM. We consider only the case in which this constant is ev_assign*2.

If cM is ev_assign*2, then it must be the case that i = (c :=₂ v) for some cell c and expression v, and moreover

    M = ev_assign*2 c Mv′ ⌈v⌉ ⌈K⌉ ⌈a⌉ ^ M̂ ^ Mt′

By analyzing the types of the objects Mv′, M̂ and Mt′, we deduce that there is an expression v′ such that Mv′ = ⌈v′⌉, that M̂ = λ̂t:contains c ⌈v⌉. M′ for some term M′ of type ev ⌈K⌉ (return unit) ⌈a⌉, and that Mt′ = t′ for some linear assumption t′ :̂ contains c ⌈v′⌉. Moreover, we have that S = (S₀; c = v′) for some store S₀.

We can apply the induction hypothesis on M′ relative to a store representation that differs from ⌈S⌉ by the replacement of the assumption t′ with t :̂ contains c ⌈v⌉. The corresponding MLR store S′ is (S₀; c = v). We deduce in this way that there exists a derivation E′ of the judgment (S₀; c = v) K ⊢ return ⟨⟩ ↪ a.
An application of rule ev_assign*2 suffices to obtain the desired derivation. ⊠

We conclude the treatment of the adequacy of the representation of MLR evaluation derivations by showing that the function ⌈·⌉ is indeed bijective.

Lemma 3.9 (Bijectivity of the representation of MLR evaluation)
Given a store S, a continuation K, an instruction i and an answer a, where K, i, S and a are closed except for the possible presence of free cells, the representation function ⌈·⌉ is a bijection between derivations E of the MLR judgment S K ⊢ i ↪ a and canonical LLF objects of type ev ⌈K⌉ ⌈i⌉ ⌈a⌉ in the context ⌈S⌉.

Proof.
Lemma 3.7 establishes that ⌈·⌉ is a total function from the set of MLR derivations mentioned in the statement to the specified set of LLF objects. By the completeness lemma, we deduce that this function is surjective. It therefore remains solely to prove that it is also injective. Given two derivations E1 and E2 such that ⌈E1⌉ = ⌈E2⌉, the proof that E1 = E2 proceeds by induction on these derivations. ⊠

A derivation E for an evaluation judgment S K ⊢ i ↪ a is a trace of the computation that a continuation-based MLR interpreter performs when evaluating the instruction i and the continuation K to the final answer a with respect to the store contents S. According to the above adequacy theorem, such derivations are faithfully represented by the terms M inhabiting the LLF type encoding this judgment. We conclude this section by illustrating how to take advantage of this internal representation of MLR computations. We only give a small example here; more interesting examples, such as the proof of type preservation or a cut-elimination procedure for classical linear logic, can be found in [Cer96]. Specifically, we will give LLF declarations that permit counting the number of reference cells dynamically allocated during the evaluation.

In order to achieve this purpose, we first give the following declarations for natural numbers:

num  : type.
zero : num.
succ : num -> num.

The counting judgment relates an MLR computation to the number of cells it allocates. It is represented by the following type family:

count : ev K I A -> num -> type.

We implement the counting procedure in LLF by unfolding the representation of an MLR computation. We ignore the steps that do not allocate memory cells, but increment the counter by one every time rule ev_ref* is applied. We show three declarations, corresponding to the initialization step performed by rule ev_init, the allocation of a new cell by rule ev_ref*, and one of the numerous cases where nothing happens (rule ev_z):

cnt_init : count (ev_init ^ C) zero.
cnt_ref* : ({c:cell} {d:contains c V} count (C c ^ d) N)
            -> count (ev_ref* ^ ([c:cell] [d^contains c V] C c ^ d)) (succ N).
cnt_z    : count C N
            -> count (ev_z ^ C) N.
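Before turning to a concrete example, here is a small hypothetical analogue of this counting procedure in Python, included only to make the recursion pattern of count explicit; it is not part of the LLF signature, and the trace representation it uses is invented for illustration.

# Hypothetical analogue of the count judgment: an evaluation derivation is modelled
# as a nested tuple whose head names the rule applied.  As with cnt_ref*, a use of
# ev_ref* contributes one allocation; all other rules, like ev_init and ev_z, add nothing.

def count(derivation):
    rule, *premises = derivation
    allocated_here = 1 if rule == "ev_ref*" else 0
    return allocated_here + sum(count(p) for p in premises)

# A toy trace that allocates a single cell in the course of a larger computation:
trace = ("ev_letname", ("ev_ref", ("ev_ref*", ("ev_init",))))
assert count(trace) == 1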
We conclude this section by displaying the LLF base type that ascertains that the MLR expression

    letname f = ref (lam x. x)
    in f := lam x. s x ; !f ⟨⟩

allocates two cells. The concrete syntax of this term is as follows, where the first argument is the representation of the evaluation of the above expression:

count
  (ev_letname ^ (ev_seq ^ (ev_assign ^ (ev_ref ^ (ev_lam ^ (ev_cont ^ (ev_ref* ^
    ([c1:cell] [Cn1^contains c1 (lam [x:exp] x)] ev_cont ^ (ev_assign*1 ^
      (ev_lam ^ (ev_cont ^ (ev_assign*2 ^
        ([Cn1'^contains c1 (lam [x:exp] s x)] ev_cont ^ (ev_app ^ (ev_deref ^ (ev_ref ^ (ev_lam ^ (ev_cont ^ (ev_ref* ^
          ([c2:cell] [Cn2^contains c2 (lam [x:exp] x)] ev_cont ^ (ev_deref* ^
            (read_val ^ () ^ Cn2 ,
             ev_cont ^ (ev_app* ^ (ev_unit ^ (ev_cont ^ (ev_unit ^
               (ev_init ^ (col_cv ^ Cn2 ^ (col_cv ^ Cn1' ^ col_empty)))))))))))))))))
        ^ Cn1))))))))))))
  (succ (succ zero))

4 Conclusion and Future Work

In this paper, we have presented the linear logical framework LLF as an extension of LF with internal support for the representation of state-based problems. We have demonstrated its expressive power by providing a usable representation of the syntax and the semantics of an imperative variant of the functional programming language Mini-ML; space reasons prevented us from extending this encoding to aspects of the meta-theory of this language, such as a proof of its type preservation property [Cer96]. Additional substantial case studies we have completed include the formalization of a proof of cut elimination for classical linear logic, translations between minimal linear natural deduction and sequent calculus, as well as a number of puzzles and solitaires. The interested reader may access them on the World-Wide Web at [CP] or in [Cer96].

The representation language of LLF, λ^{Π⊸&⊤}, conservatively extends that of LF with constructs from linear logic. We can think of it as the type theory obtained from the type constructors ⊤, &, ⊸ and Π. This choice of constructors is complete in the sense that they suffice to represent full intuitionistic or classical linear logic. Further, adding any other linear connective as a free type constructor destroys the property that usable canonical forms exist, by introducing commuting conversions. This property is necessary in the proofs of adequacy theorems for encodings and also for the interpretation of LLF as an abstract logic programming language.

The meta-representation methodology of LLF extends the judgments-as-types technique adopted in LF with a direct way to map state-related constructs and behaviors onto the linear operators of λ^{Π⊸&⊤}. The resulting representations retain the elegance and immediacy that characterize LF encodings, and the ease of proving their adequacy.

LLF maintains the computational nature of LF as an abstract logic programming language. The implementation of LLF combines the experience with higher-order logic programming languages gained with Elf [Pfe91, Pfe94a], an older realization of LF, with previous research work on linearity as in the language Lolli [HM94, CHP96], and with new experimental term representation [CP97b] and compilation [Cer98] techniques. Among the new problems is the necessity of performing higher-order unification between linear terms [CP97a].

LLF generalizes other formalisms based on linear logic such as Forum [Mil94] by making linear objects available for representations, by permitting proof terms and by providing linear types. It is closely related to the system RLF of Ishtiaq and Pym [IP98], which allows dependencies on linear variables, but does not have ⊤ as an operator. Linear dependent types are potentially useful but not essential in our experience, while ⊤ is a necessary tool in many representation problems.
The meta-theory of LLF appears significantly simpler than that of RLF, a fact that might imply that proving the adequacy of an encoding may be substantially more complex in this formalism. Finally, our approach is orthogonal to general logics in the style of LU [Gir93].

In the near future, we intend to gain experience with the use of LLF as a representation language by encoding state-based deductive systems such as imperative programming language constructs, hardware systems, security protocols, and real-time systems. The availability of an implementation will be of great help in doing so, since it will enable us to concentrate on high-level representation issues. We would also like to extend the tools available in Twelf [PS], notably the theorem proving component of this system [PS98], to handle the possibilities offered by the linear operators of LLF. Finally, we are interested in investigating a generalization of the type constructors & and ⊸ of λ^{Π⊸&⊤} to linear Σ and Π types, respectively, although it currently appears that this would greatly complicate the type theory while it is not clear how much would be gained.

Acknowledgements

We would like to thank the anonymous referees for their valuable comments, which helped improve this paper as a whole.

A Formalization of MLR

A.1 Syntax

exp    : type.
tp     : type.
instr  : type.
cont   : type.
cell   : type.
cv     : type.
store  : type.
answer : type.

% Expressions
z       : exp.
s       : exp -> exp.
case    : exp -> exp -> (exp -> exp) -> exp.
unit    : exp.
pair    : exp -> exp -> exp.
fst     : exp -> exp.
snd     : exp -> exp.
lam     : (exp -> exp) -> exp.
app     : exp -> exp -> exp.
letval  : exp -> (exp -> exp) -> exp.
letname : exp -> (exp -> exp) -> exp.
fix     : (exp -> exp) -> exp.
rf      : cell -> exp.
ref     : exp -> exp.
!       : exp -> exp.
seq     : exp -> exp -> exp.
assign  : exp -> exp -> exp.

% Types
nat   : tp.
one   : tp.
cross : tp -> tp -> tp.
arrow : tp -> tp -> tp.
tref  : tp -> tp.

% Instructions
eval     : exp -> instr.
return   : exp -> instr.
case*    : exp -> exp -> (exp -> exp) -> instr.
pair*    : exp -> exp -> instr.
fst*     : exp -> instr.
snd*     : exp -> instr.
app*     : exp -> exp -> instr.
ref*     : exp -> instr.
deref*   : exp -> instr.
assign*1 : exp -> exp -> instr.
assign*2 : exp -> exp -> instr.

% Continuations
init : cont.
klam : cont -> (exp -> instr) -> cont.

% Store
estore : store.
with   : store -> cv -> store.
holds  : cell -> exp -> cv.

% Answers
close : store -> exp -> answer.
new   : (cell -> answer) -> answer.

A.2 Typing

tpc : cell -> tp -> type.
tpe : exp -> tp -> type.
tpi : instr -> tp -> type.
tpK : cont -> tp -> tp -> type.
tpS : store -> type.
tpa : answer -> tp -> type.

% Expressions
tpe_z       : tpe z nat.
tpe_s       : tpe E nat
               -> tpe (s E) nat.
tpe_case    : tpe E nat
               -> tpe E1 T
               -> ({x:exp} tpe x nat -> tpe (E2 x) T)
               -> tpe (case E E1 E2) T.
tpe_unit    : tpe unit one.
tpe_pair    : tpe E1 T1
               -> tpe E2 T2
               -> tpe (pair E1 E2) (cross T1 T2).
tpe_fst     : tpe E (cross T1 T2)
               -> tpe (fst E) T1.
tpe_snd     : tpe E (cross T1 T2)
               -> tpe (snd E) T2.
tpe_lam     : ({x:exp} tpe x T1 -> tpe (E x) T2)
               -> tpe (lam E) (arrow T1 T2).
tpe_app     : tpe E1 (arrow T2 T1)
               -> tpe E2 T2
               -> tpe (app E1 E2) T1.
tpe_letval  : tpe E1 T1
               -> ({x:exp} tpe x T1 -> tpe (E2 x) T2)
               -> tpe (letval E1 E2) T2.
tpe_letname : tpe (E2 E1) T
               -> tpe (letname E1 E2) T.
tpe_fix     : ({x:exp} tpe x T -> tpe (E x) T)
               -> tpe (fix E) T.
tpe_cell    : tpc C T
               -> tpe (rf C) (tref T).
tpe_ref     : tpe E T
               -> tpe (ref E) (tref T).
tpe_deref   : tpe E (tref T)
               -> tpe (! E) T.
tpe_seq     : tpe E1 T1
               -> tpe E2 T2
               -> tpe (seq E1 E2) T2.
tpe_assign  : tpe E1 (tref T)
               -> tpe E2 T
               -> tpe (assign E1 E2) one.

% Instructions
tpi_eval     : tpe E T
                -> tpi (eval E) T.
tpi_return   : tpe V T
                -> tpi (return V) T.
tpi_case*    : tpe V nat
                -> tpe E1 T
                -> ({x:exp} tpe x nat -> tpe (E2 x) T)
                -> tpi (case* V E1 E2) T.
tpi_pair*    : tpe V T1
                -> tpe E T2
                -> tpi (pair* V E) (cross T1 T2).
tpi_fst*     : tpe V (cross T1 T2)
                -> tpi (fst* V) T1.
tpi_snd*     : tpe V (cross T1 T2)
                -> tpi (snd* V) T2.
tpi_app*     : tpe V (arrow T2 T1)
                -> tpe E T2
                -> tpi (app* V E) T1.
tpi_ref*     : tpe V T
                -> tpi (ref* V) (tref T).
tpi_deref*   : tpe V (tref T)
                -> tpi (deref* V) T.
tpi_assign*1 : tpe V (tref T)
                -> tpe E T
                -> tpi (assign*1 V E) one.
tpi_assign*2 : tpe V1 (tref T)
                -> tpe V2 T
                -> tpi (assign*2 V1 V2) one.

% Continuations
tpK_init : tpK init T T.
tpK_lam  : ({x:exp} tpe x T1 -> tpi (I x) T)
            -> tpK K T T2
            -> tpK (klam K I) T1 T2.

% Store
tpS_empty : tpS estore.
tpS_with  : tpS S
             -> tpc C T
             -> tpe V T
             -> tpS (with S (holds C V)).

% Answers
tpa_close : tpS S
             -> tpe V T
             -> tpa (close S V) T.
tpa_new   : ({c:cell} tpc c T' -> tpa (A c) T)
             -> tpa (new A) T.

A.3 Evaluation

%%%